
for serendipity in query results. Search can be smart, but query must be dumb and strictly
obedient[7].
There are two main reasons for not getting the right query result. The first is that
we simply do not have the data to answer it. We generally try to find the best interpretation
of a question based on the data we use; if we do not have the data, the interpretation
will also be wrong. The second reason for a wrong query result is that we have the data but
are not able to interpret the question correctly. We are trying to improve the quality of the
QA system over time by adding new datasets and by refining the algorithm that interprets
the question, so hopefully it works the next time.
A big advantage over traditional search engines is that different pieces of information
can be combined, so that we can answer questions like 'Who was a student of Alonzo
Church?'. The QA system constructs a formal database query, which is addressed in formal terms
to a specific dataset. Our underlying datasets are Wikidata and OpenStreetMap. Wiki-
data contains structured knowledge about many existing entities, such as the European Union;
for example, it records that its capital is Brussels. This information is converted into the
triple "European Union" "capital" "Brussels", allowing us to answer a question like 'What
is the capital of the EU?'.
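The idea of answering a question by looking up a subject–predicate pair in a set of triples can be sketched with a toy in-memory store. The triples and the lookup function below are purely illustrative, not QAnswer's actual data structures.

```python
# Toy triple store: each fact is a (subject, predicate, object) tuple,
# mirroring the ("European Union", "capital", "Brussels") example above.
triples = [
    ("European Union", "capital", "Brussels"),
    ("Alonzo Church", "doctoral student", "Alan Turing"),
]

def lookup(subject, predicate):
    """Return every object o such that (subject, predicate, o) is stored."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# 'What is the capital of the EU?' becomes a lookup on the parsed entity/relation:
print(lookup("European Union", "capital"))  # ['Brussels']
```

The hard part of a QA system, of course, is not the lookup but translating the natural-language question into the right subject and predicate in the first place.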
We have an open API:

curl --data "query=Who is the wife of Barack Obama" http://QAnswer-core1.univ-st-etienne.fr/api/gerbil

It supports a query parameter for the question; the currently supported languages are en, fr,
de, it, es, and zh. Moreover, the knowledge bases that are currently supported are DBpedia,
Wikidata, DBLP, and Freebase.
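The same request can be built from Python with only the standard library; a minimal sketch, assuming the endpoint and the query parameter exactly as given above (no other parameter names are documented in the text, so none are invented here):

```python
from urllib.parse import urlencode
from urllib.request import Request

# Build the same POST request as the curl example above.
# Only the "query" parameter is documented; other parameters would be
# added to this dict once their names are known.
endpoint = "http://QAnswer-core1.univ-st-etienne.fr/api/gerbil"
body = urlencode({"query": "Who is the wife of Barack Obama"}).encode("utf-8")

# Supplying data makes urllib issue a POST, matching curl --data.
request = Request(endpoint, data=body)

print(request.get_method())            # 'POST'
print(request.data.decode("utf-8"))    # form-encoded question

# To actually send it: urllib.request.urlopen(request) -- omitted here
# to keep the sketch offline.
```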
4. Presentation of the research problem
The research community has made considerable efforts to use computer vision techniques
for extracting knowledge from images. On the other hand, not much attention has been
paid to innovative methods for making this knowledge available. We
hope to change this trend by using Semantic Web techniques to query the knowledge made
available by computer vision. My work focuses on bridging these two disciplines.
5. State of the art
Let us first try to understand how a Google image search engine works. There are three
main approaches.
1) Indexing the text surrounding an image and matching it against the given query;
if the query matches, the corresponding linked image is retrieved. 2) Using object identi-
fication techniques and annotating the images with the names of these objects. 3) Linking
all visually similar images to the image with the same text. For example, consider an image Img1
on some site with its surrounding text Txt1, and suppose there are some other images
Img2, Img3, Img4, etc. which may or may not have text but whose (visual) content matches
that of Img1. Now, for a given query, if Txt1 is a good match, the retrieved
result can contain Img1 in addition to Img2, Img3, Img4, etc. This is just one factor
among many others, such as matching the query with the text, the features used to represent an image,
the PageRank of the page containing an image, relevance, and the indexed database size available with